
    Towards Zero Training for Brain-Computer Interfacing

    Electroencephalogram (EEG) signals are highly subject-specific and vary considerably even between recording sessions of the same user within the same experimental paradigm. This challenges the stable operation of Brain-Computer Interface (BCI) systems. The classical approach is to train users by neurofeedback to produce fixed stereotypical patterns of brain activity. In the machine learning approach, a widely adopted method for dealing with these variations is to record a so-called calibration measurement at the beginning of each session in order to optimize spatial filters and classifiers specifically for each subject and each day. This adaptation of the system to the individual brain signature of each user removes the need for extensive user training. In this paper we suggest a new method that overcomes the requirement of these time-consuming calibration recordings for long-term BCI users. The method takes advantage of knowledge collected in previous sessions: by a novel technique, prototypical spatial filters are determined which have better generalization properties than single-session filters. In particular, they can be used in follow-up sessions without the need to recalibrate the system. This way the calibration periods can be dramatically shortened or even completely omitted for these ‘experienced’ BCI users. The feasibility of our novel approach is demonstrated with a series of online BCI experiments: although performed without any calibration measurement at all, no loss of classification performance was observed.
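    The idea of combining single-session spatial filters into a prototype can be sketched as follows. This is a minimal illustration under the assumption that one filter vector per session is already available; the paper's exact construction of prototypical filters is not reproduced here.

```python
import numpy as np

def prototype_filter(session_filters):
    """Combine single-session spatial filters into one prototypical filter.

    session_filters: list of 1-D arrays, one CSP-like filter per session
    (length = number of EEG channels). Returns a unit-norm prototype.
    """
    W = np.array([w / np.linalg.norm(w) for w in session_filters])
    # Spatial filters are sign-invariant; align each filter to the first
    # one before averaging so opposite-signed copies do not cancel out.
    ref = W[0]
    W = np.array([w if w @ ref >= 0 else -w for w in W])
    proto = W.mean(axis=0)
    return proto / np.linalg.norm(proto)
```

    Such a prototype can then be applied in a follow-up session in place of a freshly calibrated filter.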

    Analyse von Nichtstationaritäten in EEG-Signalen zur Verbesserung der Leistung von Gehirn-Computer Schnittstellen (Analysis of Nonstationarities in EEG Signals to Improve the Performance of Brain-Computer Interfaces)

    Brain-Computer Interface (BCI) research aims at the automatic translation of neural commands into control signals. These can then be used to control applications such as text input programs, electrical wheelchairs or neuroprostheses. A BCI system can, e.g., serve as a communication option for severely disabled patients or as an additional man-machine interaction channel for healthy users. In the classical “operant conditioning” approach, subjects had to undergo weeks or months of training to adjust their brain signals to the use of the system. The Berlin Brain-Computer Interface project (BBCI) has developed an Electroencephalogram-(EEG-)based system which overcomes the need for operant conditioning with advanced machine learning methods. By adapting classifiers to the highly subject-specific brain signals, even subjects with no prior experience in BCI can achieve high information transfer rates from their first session.
    However, after an initial calibration, the brain signals are rarely so stationary that the first classifier can be reused in the next experimental session. Even if the classifier was fitted to the subject on data from the same day, we sometimes encountered long periods of low performance. These drawbacks can clearly impede the continuous use of the system, which is particularly important for disabled people. The reason for this flaw is the nonstationarity of the EEG data: due to changes in the characteristic properties of the data, classification can often be corrupted. In this work, I will present a new framework for nonstationary data analysis, which encompasses methods for the quantification and visualization of nonstationary processes. The analysis of data acquired in BCI experiments will be used to exemplify the power of the methods. In particular, I show some neurophysiological evidence for the sources of the nonstationarity. Once the underlying reasons for the nonstationarity are known, classification can be adaptively enhanced; I will present some surprisingly simple methods for doing so. Finally, I will construct classifiers that are largely robust against the changes from one experimental session to the next. This novel type of classifier can be applied without initial calibration and has the potential to drastically improve the applicability of BCI devices for daily use. While the BCI scenario was used as a testbed for the framework, it can be applied to a wide range of problems: nonstationarity can occur in any field of machine learning, whenever the systems under observation change their properties over time.
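    One example of a "surprisingly simple" adaptation in this spirit is unsupervised re-centering of a linear classifier's decision threshold on unlabeled data from the new session. This sketch is an illustrative assumption, not the thesis's exact procedure; the function name and interface are hypothetical.

```python
import numpy as np

def adapt_bias(w, X_new):
    """Re-center a linear classifier's bias for a new session.

    w: weight vector of a linear classifier trained on an old session.
    X_new: (trials x features) unlabeled data from the new session.
    Returns a new bias placing the mean of the new session's features on
    the decision boundary (reasonable for roughly class-balanced data).
    """
    return -X_new.mean(axis=0) @ w
```

    Only the bias shifts; the spatial structure encoded in w is left untouched, which is why such schemes can be applied without any new labels.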

    ROBUSTIFYING EEG DATA ANALYSIS BY REMOVING OUTLIERS

    Biomedical signals such as the EEG are typically contaminated by measurement artifacts, outliers and non-standard noise sources. We propose to use techniques from robust statistics and machine learning to reduce the influence of such distortions. Two showcase application scenarios are studied: (a) Lateralized Readiness Potential (LRP) analysis, where we show that a robust treatment of the EEG reduces both the number of trials needed for averaging and the detrimental influence of, e.g., ocular artifacts, and (b) single-trial classification in the context of Brain-Computer Interfacing, where outlier removal procedures can strongly enhance the classification performance.
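    A common robust-statistics scheme of the kind referred to here rejects trials whose variance is extreme under a median/MAD criterion. This is a generic sketch, assuming trials are stored as an array; it is not necessarily the paper's exact rejection rule.

```python
import numpy as np

def remove_outlier_trials(trials, thresh=3.5):
    """Reject EEG trials whose overall variance is an outlier.

    trials: array of shape (n_trials, n_channels, n_samples).
    Uses the median and median absolute deviation (MAD), which, unlike
    mean and standard deviation, are not themselves corrupted by the very
    outliers being detected. Returns retained trials and a keep-mask.
    """
    v = np.log(trials.var(axis=(1, 2)))          # per-trial log-variance
    med = np.median(v)
    mad = np.median(np.abs(v - med)) + 1e-12     # avoid division by zero
    keep = np.abs(v - med) / (1.4826 * mad) <= thresh  # robust z-score
    return trials[keep], keep
```

    The factor 1.4826 scales the MAD to be comparable with a standard deviation under Gaussian data, so `thresh` can be read like a z-score cutoff.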

    Covariate shift adaptation by importance weighted cross validation

    A common assumption in supervised learning is that the input points in the training set follow the same probability distribution as the input points that will be given in the future test phase. However, this assumption is not satisfied when, for example, predictions must be extrapolated outside the training region. The situation where the training input points and test input points follow different distributions, while the conditional distribution of output values given input points is unchanged, is called covariate shift. Under covariate shift, standard model selection techniques such as cross validation do not work as desired, since their unbiasedness is no longer maintained. In this paper, we propose a new method called importance weighted cross validation (IWCV), and prove its unbiasedness even under covariate shift. The IWCV procedure is the only one that can be applied for unbiased classification under covariate shift, whereas alternatives to IWCV exist for regression. The usefulness of the proposed method is illustrated by simulations, and furthermore demonstrated on brain-computer interface data, where strong nonstationarity effects can be seen between training and test sessions.
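    The core of IWCV is to weight each held-out loss by the density ratio w(x) = p_test(x) / p_train(x), so the cross-validation estimate targets the test distribution. A minimal sketch, assuming the importance weights are already given (in practice the density ratio must itself be estimated); normalizing by the weight sum is one common variant, not the paper's exact estimator:

```python
import numpy as np

def iwcv_score(X, y, weights, fit, loss, k=5):
    """Importance-weighted cross validation risk estimate.

    weights: importance weights w(x_i) = p_test(x_i) / p_train(x_i).
    fit(Xtr, ytr) -> prediction function; loss(y_true, y_pred) -> array
    of per-sample losses. Returns the weighted held-out risk.
    """
    n = len(y)
    folds = np.array_split(np.random.permutation(n), k)
    total, wsum = 0.0, 0.0
    for te in folds:
        tr = np.setdiff1d(np.arange(n), te)
        predict = fit(X[tr], y[tr])
        l = loss(y[te], predict(X[te]))
        # each held-out loss counts in proportion to its importance weight
        total += float(np.sum(weights[te] * l))
        wsum += float(np.sum(weights[te]))
    return total / wsum
```

    With all weights equal to one, this reduces to ordinary k-fold cross validation.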

    Reducing calibration time for brain-computer interfaces: A clustering approach

    Up to now, even subjects who are experts in the use of machine-learning-based BCI systems still have to undergo a calibration session of about 20-30 min, from which their (movement) intentions are inferred. We now propose a new paradigm that allows such calibration to be omitted entirely and instead transfers knowledge from prior sessions. To achieve this goal we first define normalized CSP features and distances between them. Second, we derive prototypical features across sessions, either (a) by clustering or (b) by feature concatenation. Finally, we construct a classifier based on these individualized prototypes and show that, indeed, classifiers can be successfully transferred to a new session for a number of subjects.
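    The clustering route (a) can be sketched with plain k-means on unit-normalized filter vectors. This is an illustrative stand-in: the paper's specific normalized-CSP distance and clustering procedure are not reproduced, and the deterministic initialization below is an assumption for the sketch.

```python
import numpy as np

def cluster_prototypes(filters, n_proto=2, n_iter=50):
    """Derive prototypical spatial filters from pooled sessions via k-means.

    filters: (n_filters, n_channels) array of CSP-like filters pooled
    across sessions. Returns n_proto unit-norm cluster centroids, which
    can serve as session-independent prototype filters.
    """
    F = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    centers = F[[0, -1]].copy()[:n_proto]  # simple deterministic init
    for _ in range(n_iter):
        # assign each filter to its nearest centroid
        d = np.linalg.norm(F[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                c = F[labels == j].mean(axis=0)
                centers[j] = c / np.linalg.norm(c)   # keep unit norm
    return centers
```

    A new session can then be classified with a fixed classifier built on these prototypes instead of a freshly calibrated one.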

    The Berlin Brain-Computer Interface: Report from the Feedback Sessions

    Brain-Computer Interface (BCI) systems establish a direct communication channel from the brain to an output device. These systems use brain signals recorded from the scalp, the surface of the cortex, or from inside the brain to enable users to control a variety of applications. BCI systems that bypass conventional motor output pathways of nerves and muscles can provide novel control options for paralyzed patients. The classical approach to establishing EEG-based control is to set up a system that is controlled by a specific EEG feature which is known to be susceptible to conditioning, and to let the subjects learn voluntary control of that feature. In contrast, the Berlin Brain-Computer Interface (BBCI) uses well-established motor competences in control paradigms and a machine learning approach to extract subject-specific discriminability patterns from high-dimensional features. Thus the long subject training is replaced by a short calibration measurement (20 minutes) and machine training (1 minute). We report results from a study with six subjects who had no or little experience with BCI feedback. The experiment encompassed three kinds of feedback that were all controlled by voluntary brain signals, independent of peripheral nervous system activity and without resorting to evoked potentials. Two of the feedback protocols were asynchronous and one was synchronous.